The Essence language allows a user to specify a constraint problem at a level of abstraction above that at which constraint modelling decisions are made. Essence specifications are refined into constraint models by the Conjure automated modelling tool, which employs a suite of refinement rules. However, Essence is a rich language in which there are many equivalent ways to specify a given problem. A user may therefore omit the use of domain attributes or abstract types, resulting in fewer applicable refinement rules and hence a reduced set of output models from which to select. This paper addresses the problem of automatically recovering this information, so as to make the quality of the output constraint models robust to variation in the input Essence specification. We present refactoring rules that can change the type of a decision variable or add attributes that narrow its domain. We demonstrate the efficacy of this approach in terms of the number and quality of models that can be produced from the transformed specifications compared with the originals.
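As a rough sketch of what such a refactoring rule might look like, the following Python fragment (Python rather than Essence syntax, with `SetDomain`, `CardinalityConstraint`, and `add_size_attribute` all being hypothetical names introduced here, not Conjure's internals) recovers a size attribute for an abstract set variable from an explicit cardinality constraint:

```python
# A minimal sketch (not Conjure's actual implementation) of a refactoring
# rule that adds a domain attribute to an abstract decision variable.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SetDomain:
    """Abstract 'set of int(lo..hi)' domain, optionally carrying a size attribute."""
    lo: int
    hi: int
    size: Optional[int] = None  # None means the size attribute was omitted

@dataclass
class CardinalityConstraint:
    """Stands in for a constraint of the form |s| = k in the specification."""
    k: int

def add_size_attribute(domain: SetDomain, constraints: list) -> SetDomain:
    """Rewrite rule: if the specification fixes the cardinality of the set,
    fold that information into the domain as a size attribute, narrowing it
    and enabling additional refinement rules downstream."""
    for c in constraints:
        if isinstance(c, CardinalityConstraint):
            return SetDomain(domain.lo, domain.hi, size=c.k)
    return domain  # nothing to recover; leave the domain unchanged

# Example: 'find s : set of int(1..10)' together with '|s| = 3'
original = SetDomain(1, 10)
refactored = add_size_attribute(original, [CardinalityConstraint(3)])
print(refactored)  # SetDomain(lo=1, hi=10, size=3)
```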
Federated learning is generally used in tasks where labels are readily available (e.g., next-word prediction). Relaxing this constraint requires the design of unsupervised learning techniques that can support the desirable properties of federated training: robustness to statistical/systems heterogeneity, scalability with the number of participants, and communication efficiency. Prior work on this topic has focused on directly extending centralized self-supervised learning techniques, which were not designed to have the properties listed above. To address this situation, we propose Orchestra, a novel unsupervised federated learning technique that exploits the federation's hierarchy to orchestrate a distributed clustering task and enforce a globally consistent partitioning of the clients' data into discriminable clusters. We show that the algorithmic pipeline in Orchestra guarantees good generalization performance under a linear probe, allowing it to outperform alternative techniques under a broad range of conditions, including variation in heterogeneity, number of clients, participation rate, and local epochs.
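A minimal sketch of the two-level clustering idea described above, assuming scikit-learn's KMeans on toy data; the function names and round structure are illustrative, not the authors' released pipeline:

```python
# Minimal sketch of a two-level (client -> server) clustering round,
# loosely in the spirit of the distributed clustering task described above.
import numpy as np
from sklearn.cluster import KMeans

def local_clustering(client_data: np.ndarray, n_local: int = 8) -> np.ndarray:
    """Each client summarizes its (unlabeled) data by local centroids."""
    km = KMeans(n_clusters=n_local, n_init=10, random_state=0).fit(client_data)
    return km.cluster_centers_

def global_clustering(all_centroids: np.ndarray, n_global: int = 4) -> KMeans:
    """The server clusters the clients' centroids into global clusters,
    giving every client a consistent cluster assignment to train against."""
    return KMeans(n_clusters=n_global, n_init=10, random_state=0).fit(all_centroids)

# Toy federation: 5 clients with heterogeneous 2-D data.
rng = np.random.default_rng(0)
clients = [rng.normal(loc=i, scale=0.5, size=(200, 2)) for i in range(5)]
centroids = np.vstack([local_clustering(x) for x in clients])
global_model = global_clustering(centroids)
# Clients would then use global_model.predict(...) as shared pseudo-labels.
print(global_model.cluster_centers_.shape)  # (4, 2)
```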
Improving the standard of care for diseases is predicated on better treatments, which in turn depend on finding and developing new drugs. However, drug discovery is a complex and expensive process. The adoption of machine learning methods has led to the creation of drug discovery knowledge graphs that exploit the inherently interconnected nature of the domain. Graph-based modelling of the data, combined with knowledge graph embeddings, provides a more intuitive representation of the domain that is suitable for inference tasks such as predicting missing links. One such example would produce a ranked list of genes likely to be associated with a given disease, often referred to as target discovery. It is therefore critical that these predictions are not only relevant but also biologically meaningful. However, knowledge graphs can be biased, either directly due to the underlying data sources being integrated or due to modelling choices made during graph construction, one consequence of which is that certain entities can become topologically over-represented. We show that knowledge graph embedding models can be affected by this structural imbalance, causing densely connected entities to be ranked highly regardless of context. We provide support for this observation across different datasets, models and prediction tasks. Furthermore, we show how the graph topology can be perturbed with random, biologically meaningless information to artificially change the rank of a gene. This suggests that such models can be influenced more by entity frequency than by the biological information encoded in the relations, which creates problems when entity frequency is not a true reflection of the underlying data. Our results highlight the importance of data modelling choices and emphasize the need for practitioners to be mindful of these issues when interpreting model outputs and when composing knowledge graphs.
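The following sketch, not taken from the paper's codebase, illustrates one simple way to check for the bias described above: correlating an embedding model's predicted ranks with entity degree in the graph:

```python
# Illustrative check (not from the paper's codebase) for whether a model's
# ranked predictions simply track how densely connected each entity is.
import numpy as np
from scipy.stats import spearmanr

def degree_rank_correlation(ranked_entities, degree):
    """ranked_entities: entity ids ordered best-first by the embedding model.
    degree: dict mapping entity id -> number of edges in the knowledge graph.
    Returns the Spearman correlation between predicted rank and node degree;
    a strong negative value (better rank <-> higher degree) signals the
    topological bias discussed above."""
    ranks = np.arange(1, len(ranked_entities) + 1)
    degrees = np.array([degree[e] for e in ranked_entities])
    rho, pval = spearmanr(ranks, degrees)
    return rho, pval

# Toy example: genes g0..g4 ranked for a disease, with g0 the graph 'hub'.
ranking = ["g0", "g1", "g2", "g3", "g4"]
degrees = {"g0": 120, "g1": 45, "g2": 30, "g3": 12, "g4": 3}
print(degree_rank_correlation(ranking, degrees))  # rho = -1.0 here
```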
Data augmentation is an important component of robustness evaluation for natural language processing (NLP) models, as well as of enhancing the diversity of the data on which they are trained. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (splits of the data according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, data cards, and robustness analysis results are publicly available in the NL-Augmenter repository (https://github.com/gem-benchmark/nl-augmenter).
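The snippet below gives a schematic transformation and filter in the spirit of the framework; the class and method names are illustrative assumptions rather than NL-Augmenter's exact interface:

```python
# A schematic transformation and filter in the spirit of the framework above;
# the class and method names are illustrative, not NL-Augmenter's exact API.
import random

class CharacterSwapTransformation:
    """Perturbs a sentence by swapping a few adjacent characters, to probe robustness."""
    def __init__(self, swap_prob: float = 0.05, seed: int = 0):
        self.swap_prob = swap_prob
        self.rng = random.Random(seed)

    def generate(self, sentence: str) -> str:
        chars = list(sentence)
        for i in range(len(chars) - 1):
            if chars[i].isalpha() and self.rng.random() < self.swap_prob:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

class ShortSentenceFilter:
    """Keeps only examples whose length falls below a threshold."""
    def __init__(self, max_words: int = 10):
        self.max_words = max_words

    def filter(self, sentence: str) -> bool:
        return len(sentence.split()) <= self.max_words

print(CharacterSwapTransformation().generate("natural language augmentation"))
print(ShortSentenceFilter().filter("a short example sentence"))  # True
```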
With the rise of deep neural networks, the challenge of explaining the predictions of these networks has become increasingly recognized. While many methods exist for explaining the decisions of deep neural networks, there is currently no consensus on how to evaluate them. On the other hand, robustness is a popular topic in deep learning research; however, until recently it has hardly been discussed in connection with interpretability. In this tutorial, we first present gradient-based interpretability methods. These techniques use gradient signals to assign the burden of the decision to the input features. We then discuss how gradient-based methods can be evaluated for their robustness and the role that adversarial robustness plays in obtaining meaningful explanations. We also discuss the limitations of gradient-based methods. Finally, we present best practices and properties that should be examined before choosing an interpretability method. We conclude with future directions for research in the area at the convergence of robustness and interpretability.
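A minimal example of gradient-based attribution (vanilla saliency) in PyTorch; the tiny model below is a stand-in used only to show the mechanics of assigning the decision to input features via gradients:

```python
# Minimal gradient-based attribution (vanilla saliency) sketch in PyTorch;
# the model here is a stand-in, not tied to any particular architecture.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(1, 4, requires_grad=True)   # input whose features we attribute
target_class = 2

logits = model(x)
score = logits[0, target_class]
score.backward()                            # gradient of the class score w.r.t. x

saliency = x.grad.abs()                     # magnitude as a feature-importance proxy
print(saliency)
```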
Drug discovery and development is a complex and costly process. Machine learning approaches are being investigated to help improve the effectiveness and speed of multiple stages of the drug discovery pipeline. Among these, approaches that use knowledge graphs (KGs) have shown promise in many tasks, including drug repurposing, drug toxicity prediction, and target gene-disease prioritisation. In a drug discovery KG, key elements such as genes, diseases, and drugs are treated as entities, while the relationships between them represent interactions. However, constructing a high-quality KG requires suitable data. In this review, we detail publicly available sources suitable for constructing drug discovery focused KGs. We aim to help guide machine learning and KG practitioners who wish to apply new techniques in the drug discovery field but who may be unfamiliar with the relevant data sources. The datasets are selected according to strict criteria, categorised based on the primary type of information they contain, and assessed based on what information can be extracted to build a KG. We then present a comparative analysis of existing public drug discovery KGs and evaluate selected motivating case studies from the literature. In addition, we raise numerous challenges and issues associated with the domain and its datasets, while highlighting key future research directions. We hope this review will motivate the use of KGs for key and emerging problems in the drug discovery domain.
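As a toy illustration of the entity/relation structure such a KG encodes (the triples below are invented for the example and are not drawn from any of the reviewed sources), a graph library can hold drugs, genes, and diseases as nodes and interactions as typed edges:

```python
# Toy illustration of the entity/relation structure described above
# (the triples are invented for the example, not from any listed source).
import networkx as nx

triples = [
    ("drug:aspirin", "treats", "disease:headache"),
    ("gene:PTGS2", "associated_with", "disease:headache"),
    ("drug:aspirin", "inhibits", "gene:PTGS2"),
]

kg = nx.MultiDiGraph()
for head, relation, tail in triples:
    kg.add_edge(head, tail, relation=relation)

print(kg.number_of_nodes(), kg.number_of_edges())  # 3 nodes, 3 edges
```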
Accurate determination of a small molecule candidate (ligand) binding pose in its target protein pocket is important for computer-aided drug discovery. Typical rigid-body docking methods ignore the pocket flexibility of the protein, while more accurate pose generation using molecular dynamics is hindered by slow protein dynamics. We develop a tiered tensor transform (3T) algorithm to rapidly generate diverse protein-ligand complex conformations for both pose and affinity estimation in drug screening, requiring neither machine learning training nor lengthy dynamics computation, while maintaining both coarse-grain-like coordinated protein dynamics and atomistic-level details of the complex pocket. The 3T conformation structures we generate are closer to experimental co-crystal structures than those generated by docking software and, more importantly, achieve significantly higher accuracy in active ligand classification than traditional ensemble docking using hundreds of experimental protein conformations. 3T structure transformation is decoupled from the system physics, making future usage in other computational scientific domains possible.
Modelling and forecasting real-life human behaviour using online social media is an active endeavour of interest in politics, government, academia, and industry. Since its creation in 2006, Twitter has been proposed as a potential laboratory that could be used to gauge and predict social behaviour. During the last decade, the user base of Twitter has been growing and becoming more representative of the general population. Here we analyse this user base in the context of the 2021 Mexican Legislative Election. To do so, we use a dataset of 15 million election-related tweets in the six months preceding election day. We explore different election models that assign political preference to either the ruling parties or the opposition. We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods. These results demonstrate that analysis of public online data can outperform conventional polling methods, and that political analysis and general forecasting would likely benefit from incorporating such data in the immediate future. Moreover, the same Twitter dataset with geographical attributes is positively correlated with results from official census data on population and internet usage in Mexico. These findings suggest that we have reached a period in time when online activity, appropriately curated, can provide an accurate representation of offline behaviour.
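A simplified sketch of the kind of aggregation such an election model might perform; the column names and the party-to-bloc mapping below are illustrative assumptions, not the authors' exact pipeline:

```python
# Simplified sketch of estimating two-bloc support from geo-tagged tweets;
# column names and the party-to-bloc mapping are illustrative assumptions.
import pandas as pd

bloc = {"MORENA": "ruling", "PT": "ruling", "PVEM": "ruling",
        "PAN": "opposition", "PRI": "opposition", "PRD": "opposition"}

tweets = pd.DataFrame({
    "state": ["CDMX", "CDMX", "Jalisco", "Jalisco", "Nuevo Leon"],
    "party": ["MORENA", "PAN", "PRI", "MORENA", "PAN"],
})

tweets["bloc"] = tweets["party"].map(bloc)
share = (tweets.groupby("state")["bloc"]
               .value_counts(normalize=True)
               .rename("share")
               .reset_index())
print(share)  # per-state share of tweets associated with each bloc
```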
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
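A sketch of the oracle-style greedy policy described above, estimating conditional mutual information from counts on small discrete data; the amortized network in the paper replaces this explicit estimation, and the code is written from the description rather than the authors' implementation:

```python
# Oracle-style greedy selection by conditional mutual information, for small
# discrete datasets; written from the description above, not the authors' code.
import numpy as np
from collections import Counter

def conditional_entropy(y, cond):
    """H(Y | cond) from empirical counts; cond is a tuple-per-sample context."""
    n = len(y)
    joint = Counter(zip(cond, y))
    marg = Counter(cond)
    h = 0.0
    for (c, yv), cnt in joint.items():
        p_joint = cnt / n
        p_y_given_c = cnt / marg[c]
        h -= p_joint * np.log2(p_y_given_c)
    return h

def greedy_cmi_selection(X, y, budget):
    """Pick features one at a time, each maximizing I(y; X_i | selected)."""
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(budget):
        context = [tuple(row) for row in X[:, selected]]
        base = conditional_entropy(y, context)
        gains = {}
        for i in remaining:
            context_i = [tuple(row) for row in X[:, selected + [i]]]
            gains[i] = base - conditional_entropy(y, context_i)  # CMI estimate
        best = max(gains, key=gains.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 0 determines y, features 1-2 are noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))
y = X[:, 0]
print(greedy_cmi_selection(X, y, budget=2))  # feature 0 is chosen first
```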
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
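A small diagnostic sketch of the symptom described above: the per-dimension KL between a Gaussian encoder's posterior and the standard normal prior, where near-zero values indicate collapsed dimensions (this illustrates the problem, not the proposed latent-identifiable fix):

```python
# Diagnostic sketch: per-dimension KL(q(z|x) || N(0, 1)) for a Gaussian
# encoder; dimensions with near-zero KL have 'collapsed' onto the prior.
# This illustrates the symptom discussed above, not the paper's remedy.
import torch

def kl_per_dimension(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """mu, logvar: (batch, latent_dim) encoder outputs.
    Returns the average KL contribution of each latent dimension."""
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)
    return kl.mean(dim=0)

# Toy encoder outputs: dimension 0 carries signal, dimension 1 has collapsed.
mu = torch.stack([torch.randn(256) * 2.0, torch.zeros(256)], dim=1)
logvar = torch.stack([torch.full((256,), -1.0), torch.zeros(256)], dim=1)
kl = kl_per_dimension(mu, logvar)
collapsed = kl < 0.01            # a common heuristic threshold
print(kl, collapsed)
```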